    Dealing with Uncertainty in Image Forensics: a Fuzzy Approach

    Image forensics research has mainly focused on the detection of artifacts introduced by a single processing tool. In tamper detection applications, however, the kind of artifacts the forensic analyst should look for is not known beforehand, making it necessary to apply several tools developed for different scenarios. The problem, then, is twofold: i) devise a sound strategy to merge the information provided by the different tools into a single output, and ii) deal with the uncertainty introduced by error-prone tools. In this paper, we introduce a framework based on Fuzzy Theory to overcome these problems. We describe a practical implementation of the proposed framework, putting the theoretical principles into practice. To validate the proposed approach, we carried out some experiments addressing a simple realistic scenario in which three forensic tools exploit artifacts introduced by JPEG compression to detect cut&paste tampering within a specified region of an image. The results are encouraging, especially when compared with those obtained by simply XOR-ing the outputs of the single detection tools.
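
    The fusion problem can be illustrated with a short sketch. The snippet below is a minimal illustration, not the paper's actual framework: each tool is assumed to return a tampering membership in [0, 1], memberships are weighted by hypothetical per-tool reliabilities and aggregated with a fuzzy OR (probabilistic-sum t-conorm), and the result is contrasted with the hard XOR baseline mentioned above. All scores and reliability values are made up for illustration.

# Minimal sketch (not the paper's framework): fusing the outputs of several
# error-prone forensic tools with fuzzy operators instead of a hard XOR.
# Tool scores and reliability weights below are illustrative assumptions.

def fuzzy_or(memberships):
    """Probabilistic-sum t-conorm: a OR b = a + b - a*b, folded over all tools."""
    result = 0.0
    for m in memberships:
        result = result + m - result * m
    return result

def fuse(tool_scores, reliabilities):
    """Weight each tool's tampering membership by its assumed reliability,
    then aggregate with a fuzzy OR to obtain a single tampering degree."""
    weighted = [s * r for s, r in zip(tool_scores, reliabilities)]
    return fuzzy_or(weighted)

def xor_fusion(tool_scores, threshold=0.5):
    """Hard baseline for comparison: binarize each tool's output and XOR the bits."""
    decision = False
    for s in tool_scores:
        decision ^= (s >= threshold)
    return decision

if __name__ == "__main__":
    # Hypothetical outputs of three JPEG-artifact detectors on a suspect region.
    scores = [0.8, 0.6, 0.1]
    reliabilities = [0.9, 0.7, 0.5]   # assumed per-tool trust levels
    print("fuzzy tampering degree:", round(fuse(scores, reliabilities), 3))
    print("XOR baseline decision :", xor_fusion(scores))

    Unlike the XOR of binarized decisions, which flips whenever one tool crosses its threshold, the fuzzy aggregate degrades gracefully when individual tools are uncertain or unreliable.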

    Attacking image classification based on Bag-of-Visual-Words

    Nowadays, with the widespread diffusion of online image databases, the ability to easily search, browse and filter image content has become a pressing need. Typically, this is made possible by the use of tags, i.e., textual representations of semantic concepts associated with the images. The tagging process is either performed by users, who manually label the images, or by automatic image classifiers, so as to reach a broader coverage. Typically, these methods rely on the extraction of local descriptors (e.g., SIFT, SURF, HOG), the construction of a suitable feature-based representation (e.g., bag-of-visual-words), and the use of supervised classifiers (e.g., SVM). In this paper, we show that such a classification procedure can be attacked by a malicious user who might be interested in altering the tags automatically suggested by the classifier. This could be exploited, for example, by an attacker who wants to avoid the automatic detection of improper material in a parental control system. More specifically, we show that it is possible to modify an image so that it is associated with the wrong class, without perceptually affecting its visual quality. The proposed method is validated against a well-known image dataset, and the results are promising, highlighting the need to jointly study the problem from the standpoint of both the analyst and the attacker.
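
    For concreteness, the sketch below outlines the kind of tagging pipeline such an attack targets: local descriptors are quantized into a visual-word codebook, each image becomes a word-occurrence histogram, and a linear SVM predicts the tag. The library choices (OpenCV SIFT, scikit-learn), the codebook size, and the function names are illustrative assumptions; the attack strategy itself, which perturbs pixels so the histogram crosses the decision boundary while the change stays imperceptible, is not reproduced here.

# Minimal sketch of a bag-of-visual-words tagging pipeline of the kind the
# attack targets. All parameters and library choices are assumptions, not
# the setup prescribed by the paper.

import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import LinearSVC

N_VISUAL_WORDS = 256  # assumed codebook size

def extract_descriptors(image_paths):
    """Collect SIFT descriptors from a set of training images."""
    sift = cv2.SIFT_create()
    all_desc = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(gray, None)
        if desc is not None:
            all_desc.append(desc)
    return np.vstack(all_desc)

def build_codebook(descriptors):
    """Quantize the descriptor space into visual words with k-means."""
    return MiniBatchKMeans(n_clusters=N_VISUAL_WORDS, n_init=3).fit(descriptors)

def bovw_histogram(image_path, codebook):
    """Represent one image as a normalized histogram of visual-word occurrences."""
    sift = cv2.SIFT_create()
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(gray, None)
    hist = np.zeros(N_VISUAL_WORDS)
    if desc is not None:
        for word in codebook.predict(desc):
            hist[word] += 1
        hist /= max(hist.sum(), 1.0)
    return hist

def train_classifier(histograms, labels):
    """Supervised tag prediction on top of the BoVW representation."""
    return LinearSVC().fit(histograms, labels)

    The attack exploits the fact that the classifier never sees the pixels directly, only the quantized histogram, so small, visually negligible pixel changes that shift descriptors across visual-word boundaries can move the histogram into the region of a different class.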